Notes - Snowdon, Mind VIII, consciousness

Greg Detre

Friday, 01 December, 2000

Paul Snowdon, Mind VIII

 

Notes - Snowdon, Mind VIII, consciousness

Essay titles

Snowdon

Mine

Reading list

Snowdon

Mine

Discarded

Notes - Rorty, 'Consciousness, intentionality and the philosophy of mind'

Notes - Searle, 'Minds, brains and programs'

The Systems Reply (Berkeley)

The Robot Reply (Yale)

The Brain Simulator Reply (Berkeley and M.I.T.)

The Combination Reply (Berkeley and Stanford)

The Other Minds Reply (Yale)

The Many Mansions Reply (Berkeley)

Back to the original claim

Notes - Nagel, 'Panpsychism'

Notes - McGinn, 'Can we solve the mind-body problem?'

Notes - Nagel, 'What is it like to be a bat?'

Points

Old misc

Theories to consider

New

Funny quotes

Excerpts

Chrucky, 'Interview with Chalmers', for Philosophy Now (1998)

Sprigge, 'Panpsychism' in Routledge Encyclopedia of Philosophy, 1998

Abstract from Searle, 'Minds, brains and programs'

Questions

Materialism and functionalism

Nagel + Sprigge - Panpsychism

Chalmers

Dennett

Nagel - What is it like to be a bat?

Rorty - consciousness + intentionality

Searle - minds, brains and programs

Discarded

Structure

To do

 

Essay titles

Snowdon

Is consciousness what makes the mind-body problem difficult?

Can philosophers say anything interesting about consciousness at all?

Can we say that experiences possess qualia?

Is it illuminating to say that experiences are like something to undergo?

Why cannot conscious states be reduced to physical states?

Mine

What do we know about consciousness? What can we know?

Reading list

Snowdon

Davies, M, 'The philosophy of mind', section 3 in Grayling (ed), Philosophy

O'Shaughnessy, 'Consciousness', in Midwest Studies in Philosophy, Vol X

Flanagan, 'Consciousness reconsidered', chs 2, 4, 6, and 7

Mine

Chalmers, 'Facing up to the problem of consciousness', in Journal of Consciousness Studies (1995)

Davidson, 'Mental events', in Actions and Events

Dennett (1991), 'Consciousness explained', ch 5

Dennett, 'Taking the first-person point of view in consciousness'

Dreyfus, What Computers Still Can't Do

Jackson, 'Epiphenomenal qualia'

Jeffrey, M (1999), The Human Computer

McGinn, 'Can we solve the mind-body problem?'

Nagel (1974), 'What is it like to be a bat?'

Nagel, 'Panpsychism' in Mortal Questions

Nagel, 'Conceiving the impossible and the mind-body problem'

Rorty, 'Consciousness, intentionality and the philosophy of mind' in Warner and Szubka (eds), The Mind-Body Problem (1994)

Searle, John (1980), "Minds, Brains and Programs," Behavioral and Brain Sciences, 3, pp.417-58.

 

Blakemore

Churchland

Greenfield

Penrose/Swinburne

Discarded

Rorty (1991), Objectivity, Relativism and Truth (includes 'Non-reductive physicalism')

Notes - Rorty, 'Consciousness, intentionality and the philosophy of mind'

Rorty begins by considering Ryle, who tried to show that Cartesian philosophy of mind is founded on 'misunderstandings of the logic of our language'. Ryle's central argument was that we had misconceived the 'logic' of such words as 'belief', 'sensation', 'conscious' etc. He thought that the traditional, Cartesian theory of mind had

'misconstrued the type-distinction between disposition and exercise into its mythical bifurcation of unwitnessable mental causes and their witnessable physical effects' (pg 32).

Ryle's attempt to do philosophy of mind as 'conceptual analysis' was founded on the pre-Quinean idea that philosophical puzzles arose out of 'misunderstandings of the logic of our language'. This idea of a 'logic of our language' fell into disrepute under Quine and Wittgenstein, and Ryle failed to explain adequately why the 'absurd category mistakes' of Cartesianism, with its nonspatial causal mechanisms and its confusion of dispositions with inner events, are so alluring. Ryle's counter-intuitive claim is that 'the mind' is simply the product of a misleading way of describing the organism's behaviour.

Then came Armstrong and Smart, the 'central-state materialists'. Rather than trying to conceptually analyse away mental states, they sought to show how the vocabulary of mental states could be interpreted as referring to the scientific, empirically investigable processes of brain states. Armstrong was aware that:

although Behaviourism may be a satisfactory account of the mind from an other-person point of view, it will not do as a first-person account ... We are conscious, we have experiences. Now can we say that to be conscious, to have experiences, is simply for something to go on within us apt for the causing of certain sorts of behaviour (Armstrong, 1980, pg 197)

Armstrong responded that:

Consciousness is a self-scanning mechanism in the central nervous system (Armstrong, pg 199)

i.e. that we should:

interpret the notion of experience, of consciousness, as that of acquisition of beliefs about our inner states (Rorty)

Then, Putnam argued that different states of a brain could cause the same behavioural disposition - the same mental states could occur independently of any particular physiological event. Putnam's functionalism considered:

mental states as functional states, states which mediate between input and output in the way in which program states of computers do and which can, like program states, be viewed as symbolic representations of states of affairs ... as sentential attitudes

i.e. it's what a mental state does, rather than how it is represented or how the mechanisms actually work, that defines it. So, I could consider mental states to be black boxes, employing neurons, silicon, computer hardware or whatever, to achieve a given functional result. This was Putnam's answer to 'How can we make sense of the various research programs upon which cognitive psychologists are embarked?'

A functionalist believes that all the notion of 'conscious experience' amounts to is a system's being able to represent symbolically its own symbolic representations, e.g. whether computers are conscious is simply a question of whether they can be programmed to report on their own program states.
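As a toy illustration of this higher-order picture (my own hedged sketch, not anything Rorty or Putnam give; the class and state names are invented), a system can hold first-order states plus a second mechanism that does nothing but produce symbolic reports about them:

# Minimal sketch of the functionalist idea that 'conscious access' is just a
# system reporting on its own internal (program) states. Names are illustrative;
# nothing here is claimed to be conscious.

class ToyAgent:
    def __init__(self):
        self.first_order = {}  # first-order states, e.g. perceptual registrations

    def register(self, name, value):
        # A first-order state: the system symbolically represents some state of affairs.
        self.first_order[name] = value

    def report(self):
        # A second-order state: a symbolic representation of the system's own
        # first-order representations - the functionalist's stand-in for 'access'.
        return [f"I am in state {name!r} with content {value!r}"
                for name, value in self.first_order.items()]

agent = ToyAgent()
agent.register("colour", "red")
agent.register("temperature", "warm")
print(agent.report())  # the system 'reports on its own program states'

On this picture the report is all that 'conscious access' amounts to, which is exactly the move that Searle and Nagel object to below.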

As Searle has rightly said, the entire history of philosophy of mind since Ryle is marked by a refusal to take consciousness seriously, or, more exactly, by an insistence on taking 'conscious experience' to be a matter of having beliefs.

Or, in Dennett's words:

we have access - conscious access - to the results of mental processes, but not to the processes themselves ...

propositional episodes, these thinkings that p, are our normal and continuous avenue to self-knowledge, ... they exhaust our immediate awareness (Brainstorms, pg 165)

Rorty then considers Dennett in the light of Locke, Quine, Aristotle, Galileo and Wittgenstein???

A defender of the subjective realm such as Nagel must grant that in general, whether or not it was like something to be x, whether or not the subject experienced being x - questions that define the subjective realm - are questions about which the subject's subsequent subjective opinion is not authoritative. But if the subject's own convictions do not settle the matter, and if, as Nagel holds, no objective considerations are conclusive either, the subjective realm floats out of ken altogether, except perhaps for the subject's convictions about the specious present (Brainstorms, pg 143)

Rorty makes a division between Ryle, Armstrong, Putnam and Dennett on the one hand, and Nagel, Kripke and Searle on the other???

Consciousness is what makes the mind-body problem really intractable.

Certainly it appears unlikely that we will get closer to the real nature of human experience by leaving behind the particularity of our human point of view and striving for a description in terms accessible to beings that could not imagine what it was like to be us ('What is it like to be a bat?', 1980, pg 164)

For Nagel's and Searle's opponents however, all this talk of 'what it is like', 'real nature' and 'close scrutiny' is pre-Galilean obscurantism. For them, mind is whatever psychology studies, and psychology is a discipline which finds lawlike relationships between public events. No correlations of this sort are going to involve close scrutiny of what it is like to have a pain, much less of what it is like to have a belief. (Rorty)

He considers Dennett and Davidson together as saying pretty much everything he would like to say. He ends by considering physicalism, pragmatism, holism and anti-representationalism???

Notes - Searle, 'Minds, brains and programs'

He starts by distinguishing between strong AI and weak AI. According to weak AI, computers are simply powerful tools in our study of the mind, enabling us to 'formulate and test hypotheses in a more rigorous fashion', presumably through computationally modelling cognitive processes. According to strong AI, 'an appropriately programmed computer really is a mind', i.e. it can literally be said to understand and to have cognitive states.

He focuses on Schank and Abelson (1977), but believes that the same arguments would apply to SHRDLU (Winograd 1973), ELIZA (Weizenbaum 1965) and 'indeed any Turing machine simulation of human mental phenomena'.

Schank's program has a 'representation' of the sort of information that human beings have about restaurants, by means of which it can answer questions about a story told about a restaurant (even if the information was never explicitly stated in the story). Searle is arguing against the claims that:

  1. the program can literally be said to understand the story and provide the answers
  2. what the program is doing explains humans' ability to understand the story.

He describes the now famous example of a person in a room (who knows no Chinese) being fed 'scripts', 'stories' and 'questions', all in Chinese, with rules in English detailing operations to perform on the Chinese script.

Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

He argues that even if he is able to produce Chinese writing that is indistinguishable from a native speaker's, he is not understanding a word of Chinese. Moreover, this is not how we do it.
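As a crude sketch of what 'purely formal symbol manipulation' means here (my own illustration, not Schank's program or Searle's rulebook; the symbols and rules are invented placeholders), the room's rules can be thought of as a shape-to-shape lookup with no semantics anywhere in the system:

# Toy sketch of the Chinese Room rulebook: the 'rules' map uninterpreted input
# strings to uninterpreted output strings, matched purely by shape. Neither the
# operator following them nor a computer running them knows what any symbol means.

RULEBOOK = {
    # (symbols in the 'story', symbols in the 'question') -> symbols to hand back
    ("S1 S2 S3", "Q1"): "A7",
    ("S1 S2 S3", "Q2"): "A4",
}

def operator_in_room(story, question):
    # Follow the English rules: match the shapes received, hand back the shapes listed.
    return RULEBOOK.get((story, question), "A0")  # default squiggle if no rule matches

print(operator_in_room("S1 S2 S3", "Q1"))  # -> 'A7': an 'answer' only to the people outside

Searle's point is that elaborating such a table (or any formal program, however complex) adds speed and coverage but never understanding; the outputs count as 'answers' only from outside the room.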

In the Chinese case, I have everything that AI can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding.

no reason has been given to suppose that when I understand English I am operating with any formal program at all

Searle says that he is often criticised for using understanding as a simple two-place predicate, and that it is often not at all easy to tell whether x understands y, but:

There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument

The Systems Reply (Berkeley)

"While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story ... understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."

Searle responds by making the individual memorise absolutely everything, internalising the entire system - yet, presumably, if the individual is simply following the rules, he understands no more Chinese than before. He paraphrases the Systems argument as:

"the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.

So there are really two subsystems in the man; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike.

Given that there are these two systems, very different, but both passing the Turing test, this calls the Turing test into question, 'since ... the system in me that understands English has a great deal more than the system that merely processes Chinese'.

He thinks that the Systems reply leads to the absurd conclusion that all sorts of non-cognitive systems are going to turn out to be cognitive, e.g. his stomach, which is doing information processing (since, in terms of information, the food means as much to his stomach as the meaningless Chinese squiggles being input do to the person in the Chinese room; the programmers' greater knowledge doesn't count).

Similarly:

If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not ... otherwise it will offer us no explanations of what is specifically mental about the mental ... And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems;

He takes McCarthy's statement to be a reductio ad absurdum:

"Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979).

He ridicules the idea that a thermostat could have real beliefs:

real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs.

notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers.

The Robot Reply (Yale)

What if the computer was embodied within a robot, with motor output and sensory input:

"The robot would, for example, have a television camera attached to it that enabled it to see, it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states."

Searle replies:

The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world (cf. Fodor 1980). But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program.

The Brain Simulator Reply (Berkeley and M.I.T.)

"Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"

Searle argues that this wouldn't really be strong AI, since strong AI, as he understands it, is a purely formal, symbolic thesis:

I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares

imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.

Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.

The Combination Reply (Berkeley and Stanford)

"While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system."

He agrees that in terms of external appearances and behaviour, it would indeed be convincing.

If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. We wouldn't need to know in advance that its computer brain was a formal analogue of the human brain.

He re-uses the Chinese Room argument by saying that if we place a little man inside the robot, doing the usual uninterpreted formal symbol manipulation, we would certainly not ascribe intentionality to the system.

we find it completely natural to ascribe intentionality to members of certain other primate species such as apes and monkeys and to domestic animals such as dogs. The reasons we find it natural are, roughly, two: We can't make sense of the animal's behavior without the ascription of intentionality and we can see that the beasts are made of similar stuff to ourselves

The Other Minds Reply (Yale)

"How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."

Searle's reply to this makes no sense to me.

This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

The Many Mansions Reply (Berkeley)

"Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition."

I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition ... the original claim made on behalf of artificial intelligence ... was a precise, well defined thesis: mental processes are computational processes over formally defined elements

Back to the original claim

I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program.

Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.

Notes - Nagel, 'Panpsychism'

By panpsychism, I mean the view that the basic physical constituents of the universe have mental properties, whether or not they are parts of living organisms.

1.      material composition

any living organism is a complex material system. no constituents besides matter are needed, and that matter will have had a largely inanimate history before becoming part of us.

2.      non-reductionism

there are mental states (e.g. thought, feeling, emotion, sensation, desire) that are not physical properties of the organism (behavioural, physiological or otherwise), and they are not implied by physical properties alone.

3.      Realism

nevertheless, they are properties of the organism (or of the organism with relation to something else), since there is no soul, and they are not properties of nothing at all

4.      non-emergence

there are no truly emergent properties of complex systems. all properties of a complex system that are not relations between it and something else derive from the properties of its constituents and their effects on each other when so combined. emergence is an epistemological condition.

Panpsychism seems to follow from these four premises. If the mental properties of an organism are not implied by any physical properties but must derive from properties of the organism's constituents, then those constituents must have non-physical properties from which the appearance of mental properties follows when the combination is of the right kind. Since any matter can compose an organism, all matter must have these properties. And since the same matter can be made into different types of organisms with different types of mental life (of which we have encountered only a tiny sample), it must have properties that imply the appearance of different mental phenomena when the matter is combined in different ways.

He considers three problems about the argument:

1.      Why call these inferred properties of matter mental? What is meant by a physical property and why does that concept not apply to them?

He defines the physical as anything derived by explanatory inference from familiar, observed spatio-temporal phenomena. When we try the same path of explanatory inference starting from familiar mental phenomena, we do not arrive at the same general physical properties of matter. Nagel then considers that rather than there being two separate chains of inference, they may have a common source.

2.      What view of causality is involved in the denial of emergence?

The supposition that a diamond or an organism should have truly (not just epistemologically) emergent properties is that those properties appear at certain complex levels of organisation but are not explainable in terms of any more fundamental properties, known or unknown, of the constituents of the system.

Unless we are prepared to accept ... that the appearance of mental properties in complex systems has no causal explanation at all, we must take the current epistemological emergence of the mental as a reason to believe that the constituents have properties of which we are not aware, and which do necessitate these results. ... intrinsic properties of the components must be discovered from which the mental properties of the system follow necessarily. This may be unattainable, but if mental phenomena have a causal explanation, such properties must exist, and they will not be physical.

3.      Do the features of mental phenomena that argue against reduction also argue against Realism?

What is the reason to deny that mental properties can be entailed by physical ones? This brings up the 'What is it like to be a bat?' argument, that purely physical, objective properties could not give rise to mental, subjective properties. But how can there be a subject - what is the subject, if we reject the idea of a soul, and we refuse to believe that material objects (even complex systems) can be subjects???

He considers Wittgenstein�s view as the only alternative.

the person (or mouse) who is the subject of mental states is not to be identified with an organism or a soul or anything else. He holds that all kinds of familiar propositions about the mental states of individual living beings are true, but that there is almost nothing to be said about what property must be possessed by what thing if one of these ascriptions is to be true. All such specifications of truth conditions are trivial. What can be more fully described, however, are the kinds of circumstances, including evidential grounds, that make the ascription appropriate: criteria rather than truth conditions. For third-person ascriptions, the grounds are behaviour, stimuli, circumstances and testimony. For self-ascriptions no evidential grounds are needed... Mental states are no less real than behaviour, physical stimuli and physiological processes. In fact, their situation with respect to one another is symmetrical, because physical processes have mental (specifically observational) criteria just as mental processes have physical criteria.

Although sympathetic to it, Nagel criticises the Wittgensteinian view as being too dependent on language - what about conscious beings who do not use language? There either is or is not something it is like to be them, independent of our ability to discern it (through behavioural similarities).

Nagel's position is more realistic than Wittgenstein's: the question of whether the things coming out of the spaceship are conscious (is a meaningful question and) must have an answer (beyond the 'possibility of extending mental ascriptions on evidence analogous to the human case').

He has expressed dissatisfaction with the views that mental states are states of the body, states of the soul, or that all we can say about their essence is to give criteria or conditions for their ascription. Yet there may easily be further possibilities that haven't been thought of, so Realism (that mental states belong to the organism) is still more plausible than its denial.

On this basis, panpsychism should be added to the list of mutually incompatible and unsatisfactory solutions to the mind-body problem.

Denying:

1.      material composition - leads one to dualism.

2.      non-reductionism - is common now, but not at all plausible.

3.      Realism - is attractive, but requires a viable alternative, 'some way of admitting the reality of mental occurrences without ascribing them to either organisms or souls as subjects'.

4.      non-emergence - 'involves accepting the existence of irreducible contingent laws connecting complex organic states with mental states ... mental states [would have] no causal explanation'.

As for panpsychism, it is difficult to imagine how a chain of explanatory inference could ever get from the mental states of whole animals back to the proto-mental properties of dead matter.

we know so little about how consciousness arises from matter in our own case and that of the animals in which we can identify it that it would be dogmatic to assume that it does not exist in other complex systems, or even in systems the size of a galaxy, as the result of the same basic properties of matter that are responsible for us.

Notes - Nagel, 'What is it like to be a bat?'

Consciousness is not like all the other phenomena that have been successfully reduced in modern science - it is what makes the mind-body problem intractable.

we have at present no conception of what an explanation of the physical nature of a mental phenomenon would be

But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism - something it is like for the organism.

There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism.

the subjective character of experience ... is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior - for similar reasons.

It is impossible to exclude the phenomenological features of experience from a reduction in the same way that one excludes the phenomenal features of an ordinary substance from a physical or chemical reduction of it - namely, by explaining them as effects on the minds of human observers

If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.

Bats aren't too phylogenetically distant from us, and most people would ascribe experiences to them, yet their range of activity and sensory apparatus are so different from ours that it's like encountering a 'fundamentally alien form of life'.

bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.

In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat

The problem is not confined to exotic cases, however, for it exists between one person and another. The subjective character of the experience of a person deaf and blind from birth is not accessible to me, for example, nor presumably is mine to him. This does not prevent us each from believing that the other's experience has such a subjective character.

we know that while [what it is like to be us] includes an enormous amount of variation and complexity, and while we do not possess the vocabulary to describe it adequately, its subjective character is highly specific, and in some respects describable in terms that can be understood only by creatures like us.

This brings up 'the relation between facts on the one hand and conceptual schemes or systems of representation on the other'. He considers that there might be:

humanly inaccessible facts ... facts which could not ever be represented or comprehended by human beings, even if the species lasted for ever - simply because our structure does not permit us to operate with concepts of the requisite type.

Reflection on what it is like to be a bat seems to lead us, therefore, to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language. We can be compelled to recognize the existence of such facts without being able to state or comprehend them.

Whatever may be the status of facts about what it is like to be a human being, or a bat, or a Martian, these appear to be facts that embody a particular point of view. ... The point of view in question is not one accessible only to a single individual. Rather it is a type.

It is difficult to understand what could be meant by the objective character of an experience, apart from the particular point of view from which its subject apprehends it. After all, what would be left of what it was like to be a bat if one removed the viewpoint of the bat? But if experience does not have, in addition to its subjective character, an objective nature that can be apprehended from many different points of view, then how can it be supposed that a Martian investigating my brain might be observing physical processes which were my mental processes (as he might observe physical processes which were bolts of lightning), only from a different point of view?

If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity - that is, less attachment to a specific viewpoint - does not take us nearer to the real nature of the phenomenon: it takes us farther away from it

In a sense, the seeds of this objection to the reducibility of experience are already detectable in successful cases of reduction; for in discovering sound to be, in reality, a wave phenomenon in air or other media, we leave behind one viewpoint to take up another, and the auditory, human or animal viewpoint that we leave behind remains unreduced.

physicalism is a position we cannot understand because we do not at present have any conception of how it might be true

Usually, when we are told that X is Y we know how it is supposed to be true ... We know how both "X" and "Y" refer, and the kinds of things to which they refer, and we have a rough idea how the two referential paths might converge on a single thing, be it an object, a person, a process, an event or whatever. ... but that depends on a conceptual or theoretical background and is not conveyed by the 'is' alone.

Strangely enough, we may have evidence for the truth of something we cannot really understand.

Does it make sense, in other words, to ask what my experiences are really like, as opposed to how they appear to me? We cannot genuinely understand the hypothesis that their nature is captured in a physical description unless we understand the more fundamental idea that they have an objective nature (or that objective processes can have a subjective nature).

I should like to close with a speculative proposal. It may be possible to approach the gap between subjective and objective from another direction ... a challenge to form new concepts and devise a new method - an objective phenomenology not dependent on empathy or the imagination. Though presumably it would not capture everything, its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences.

structural features of perception might be more accessible to objective description, even though something would be left out

a phenomenology that is in this sense objective may permit questions about the physical basis of experience to assume a more intelligible form.

Notes - Jeffery, 'The human computer'

Introduction

Jeffery dismisses Searle's Chinese Room with the Systems reply, without considering Searle's response to it of memorising and so internalising the whole room.

Chapter 4

He argues that people's conscious experience has no physical manifestation. People cannot be wrong or corrected about their conscious experience, only about how it tallies with reality. For him, consciousness seems to arise out of intelligence. We cannot ask whether another living being has experience of reality, only about the nature (and complexity) of that experience. Awareness is perception of perception. He rejects the Cartesian theatre homunculus as leading to an infinite regress of homunculi, each composing the awareness of the one before. Our sense of self arises out of a plurality of our high-level awareness, our entire brains, our entire bodies, and the environment we are in.

W. Grey Walter (1963) did an experiment, with patients choosing to press a button to advance a slide show. He connected an electrode to a part of the motor cortex, so that when it was active, the slide advanced of its own accord. Each time, the slide advanced before the patient had decided to press the button. Jeffery claims that we perceive the activity in the part of our brain related to action, and interpret that as a decision to act.

Points

'How it is that anything so remarkable as a state of consciousness comes about as a result of irritating nerve tissue, is just as unaccountable as the appearance of the Djinn when Aladdin rubbed his lamp in the story' - T. H. Huxley

Old misc

what do we mean by consciousness?

the 3 defining characteristics of mentality

privileged/subjective

non-extended/non-localised

phenomenal quality

multi-dimensional continuum

psycho-physical interaction

physical causal laws

anomalousness of the mental???

could there be zombies?

compare Chalmers' double aspect theory of information with my dualism at the quantum sort of level

Theories to consider

Chalmers' double aspect theory - and my variations

Panpsychism - contrast with above

Dualism

Anomalous monism

Penrose and quantum

Reductive theories

New

what we need is a view that unites a strong commitment to conscious experience with a conception of humans as highly complex computers (and materialism???)

how much would a full theory of consciousness explain in philosophy of mind???

I am looking for some means of discriminating, within the computation the brain is doing, specifically that which we have conscious experience of (or which corresponds to conscious experience); I suspect this will be closely related to the neural mechanisms of awareness.

consciousness is a phenomenon that cannot be quantitatively or objectively measured, so our only evidence for it is entirely privileged; we don't know who else has got it, and because it is everything to us, we cannot even define it in ourselves. on top of that, the word is used problematically in different disciplines to different effect.

'genuine' intentionality is an illusion(???). intentionality admits only of an operationalist definition, and can only be ascribed to a system. if there is such a thing as 'genuine' intentionality, it is simply a combination of apparent behaviourally-manifested intentionality + consciousness. but we have no way of deciding whether something is conscious.

the idea of a behaviourally-indistinguishable non-conscious person (zombie) makes no sense. consciousness is not epiphenomenal. I refuse to believe it. I am talking the way I do, and behaving the way I do, as a consequence of my conscious experience - I am reporting on my conscious experience, not merely feeling as though I'm reporting on it in real time. but that's exactly how it would feel.

Funny quotes

The juvenile sea squirt wanders through the sea searching for a suitable rock or hunk of coral to cling to and make its home for life. For this task, it has a rudimentary nervous system. When it finds its spot and takes root, it doesn't need its brain anymore so it eats it! (It's rather like getting tenure.) -- Daniel Dennett, Consciousness Explained_, p. 177

Conscious is when you are aware of something, and conscience is when you wish you weren't.

The greatest of faults is to be conscious of none.

I know of no more encouraging fact than the unquestionable ability of man to elevate his life by a conscious endeavour

Consciousness: that annoying time between naps.

Anyone who goes to a psychiatrist ought to have his head examined. - Samuel Goldwyn

You possess a mind not merely twisted, but actually sprained.

Of all the things I've lost, I miss my mind the most.

Minds are like parachutes - they work only when open.

I think; therefore I am confused.

Excerpts

Chrucky, 'Interview with Chalmers', for Philosophy Now (1998)

Your name is being identified with the distinction between an easy and a hard problem of consciousness. Herbert Feigl made a related distinction in his long essay `The Mental and the Physical'. In his `Postscript after Ten Years', he writes: "Some philosophers feel that the central issue of mind-body problems is that of intentionality (sapience); others see it in the problem of sentience; and still others in the puzzles of selfhood. Although I have focused my attention primarily on the sentience problem, I regard the others as equally important. But I must confess that, as before, the sapience and selfhood issues have vexed me less severely than those of sentience." He and his colleague, Wilfrid Sellars, couched it as the 'intentionality-body problem' (the easy problem) and the `sensorium (or sentience)-body problem' (the hard problem). Did you formulate your distinction independently? Are you making the same sort of distinction in your problems? (interviewer)

Feigl is often assimilated with the early identity theorists such as Place and Smart, but I think this is a mistake. His view is much more radical. Rather than `reducing' consciousness to what we know of the physical, Feigl wants to flesh out our view of the physical so that it can accommodate consciousness. It's not unlike a view put forward by Bertrand Russell a few decades earlier, on which consciousness provides the intrinsic nature of certain physical states that science characterizes only from the outside. Leopold Stubenberg has a nice paper distinguishing the 'Australian' and 'Austrian' versions of the identity theory. Although I'm Australian, I find myself much more in sympathy with the Austrian version! (Chalmers)

What we know now about physics and chemistry suggests that the laws of chemistry are probably a consequence of the laws of physics, although we certainly don't know all the details yet. But even if Broad was right about this, it might not threaten my claim. If he is right, the facts of chemistry aren't deducible from microphysical laws, but they might still be deducible from the totality of microphysical facts. The microphysical facts would tell us just how the particles are behaving when they are in certain configurations, for example, and from there one could figure out the behavior of various chemicals. Something similar holds for biology, I think. So even accepting Broad's view of biology and chemistry, the situation there will be different from the situation involving consciousness. Where consciousness is involved, one doesn't just have new patterns of evolution of existing qualities. One has wholly new qualities involved. These two different sorts of 'emergence' weren't always distinguished by the British emergentists, but I think they need to be kept separate.

You hold that the presence of consciousness does not depend on the components of a structure, but only on their functional organization. So if some system of things, for example, a computer or the people of China (Ned Block's example), exemplified an appropriate functional structure - regardless of the components of that structure - they would possess consciousness. In other words, the emergence of conscious experience depends only on an appropriate functional structure. (interviewer)

I don't think that consciousness can be logically deduced from either structure or function, but it is still closely correlated with these things. Any two physically identical systems in the actual world will have the same state of consciousness, as a matter of natural law. The question then is: what sort of physical factors matter? Is carbon-based biology required for consciousness, or can anything with the right organization be conscious? I hold that what matters is the functional organization. If a silicon system was set up so that its components interacted just like my neurons, it would be conscious just like me.

There are various reasons to think this. First, I can't see that neurons have anything special going for them that silicon doesn't. Second, I'm struck by the fact that the most direct correlations between consciousness and the brain can be cast at the level of information processing - when we find a certain abstract structure in physical colour processing, we also find a similar structure in our conscious colour space. So that suggests that abstract organization is important. And third, I think one can argue that if one's neurons were to be substituted one-by-one with silicon chips, then one's consciousness would stay the same. The alternatives are that it gradually fades across the spectrum, or that it disappears all at once, and I don't think either of those are very plausible. Of course it's an open question, but I think that in a century or two, once we start actually doing this sort of replacement, most people will accept this sort of view.

Sprigge, 'Panpsychism' in Routledge Encyclopedia of Philosophy, 1998

things do divide, in common opinion, into those such that there is and those such that there is not something that it is like to be them (though this expression is only an idiomatic pointer to something it requires a certain sophisticated obtuseness to be unable to identify).

Abstract from Searle, 'Minds, brains and programs'

This article can be viewed as an attempt to explore the consequences of two propositions.

1.      Intentionality in human beings (and animals) is a product of causal features of the brain.

I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality.

2.      Instantiating a computer program is never by itself a sufficient condition of intentionality.

The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences.

3.      The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.

This is a strict logical consequence of 1 and 2.

4.      Any mechanism capable of producing intentionality must have causal powers equal to those of the brain.

This is meant to be a trivial consequence of 1.

5.      Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain.

This follows from 2 and 4.

'Could a machine think?' On the argument advanced here, only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

Questions

what are the main problems in consciousness???

mind-body, FW, personal identity/thought-ownership/temporal continuity, quining qualia

what is there to the mind-body problem besides consciousness???

how unified is our consciousness??? or our consciousness of ourselves as selves???

could we not define intentionality as 'applicable to a given domain'???

Materialism and functionalism

what's the difference between functionalism and central-state materialism???

does Dennett fall victim to Searle's accusation of 'taking conscious experience to be a matter of having beliefs'???

what is the link between higher-order beliefs and functionalism - according to Rorty, 'functionalists shrug off the question "Are computers conscious?" by saying that computers can be programmed to report on their own program states (Putnam, 1961, pg 148), to represent symbolically their own symbolic representations'???

Nagel + Sprigge - Panpsychism

how do the two levels of consciousness work??? do they interact??? is the second one (weakly) emergent??? what determines that a second level of consciousness should emerge??? what defines the boundaries within which conscious-sums of objects arise???

why does behavioural spontaneity make a difference??? surely purely mental beings could easily be conscious???

can we say any more about the fundamental panpsychism??? what other fundamental quantities(???) are there in the universe (charge, mass etc.)??? are there any helpful parallels between them and sentience???

might there be two types of consciousness � sentience and conation???

might the level of psychism be measurable??? does it vary according to the nature of the physical substrate??? is there a unit/building block/minimum size of psychism, or is it continuous, like area???

what is the difference between the second kind of panpsychism and idealism/Berkeleian phenomenalism???

who (list of names) are the two different types of panpsychist philosophers???

Chalmers

how does Chalmers' view that two physically identical systems must both be conscious (or both not) square with zombie twins???

what's the difference between Chalmers' view and panpsychism???

is Chalmers a property dualist??? what is a property dualist???

Dennett

infallible vs incorrigible

isn't Dennett's comment (Brainstorms, pg 165) that 'we have access - conscious access - to the results of mental processes, but not to the processes themselves' tantamount to epiphenomenalism???

to what extent is Dennett committing himself to a linguistic consciousness???

does Dennett really believe that 'thinkings that p ... exhaust our immediate awareness'???

is Dennett happy to be known as an eliminativist???

Nagel - What is it like to be a bat?

how helpful is Nagel's characterisation of consciousness as 'something it is like to be(???)'???

surely the Martian investigating my brain processes could investigate my phenomenology in the same way as when measuring the electrical discharge of the lightning bolt - in fact, in both cases, all that is missing is an understanding of the 'mental chemistry' (Nagel), the systematic relationship elevating/translating physical processes into mental ones???

it may be, of course, that in order to understand the mental process that corresponds to a given physical process outside one's own brain, one would actually have to be that physical process, but either way, we would be able to see one side of that process, which is something, right???

is one of Nagel's major points that subjectivity is inherently type/species-specific???

has any progress been made at all on Nagel's objective phenomenology??? is it in any way similar to Dennett's heterophenomenology???

how do most philosophers deal with the subjective/objective divide???

Dennett's an eliminativist, so there is no subjective for him - he's then able to take an operationalist (i.e. 3rd person) view of consciousness

Chalmers???

Searle???

Davidson???

Rorty - consciousness + intentionality

Is Rorty right about the division he makes between Ryle, Armstrong, Putnam and Dennett on the one hand, and Nagel, Kripke and Searle on the other???

Searle - minds, brains and programs

what is the difference between causal power and (embodied) computation???

what is the difference between a program and a machine???

what is genuine intentionality, and how will we know it if we have it???

what is it about the causal powers of the brain that are special???

according to Searle then, the criterion for having/being a mind is understanding and having cognitive states (and genuine intentionality???), rather than consciousness etc.???

how much does it matter that he is focussing on Turing machines???

is Searle attacking a straw man by focussing on Schank and the like, or does the criticism extend to all similar attempts, of any complexity???

how does he get any better at following the instructions for manipulating the Chinese symbols, and what does it mean for the programmers to get so good at writing the programs???

how much difference does it make if we refuse to accept that he could ever be as good as a native speaker??? does this scupper the whole strong AI enterprise as a waste of time???

why does he add the complication of the English stories and questions???

doesn't Searle have to talk a little bit about just what sort of rules he is given in English??? are they the sort of formal rules we get in logic??? can there be a complete set of such rules, given that we've never formulated them??? are we allowed holistic rules (what do I mean here???)???

is Searle's example intended to discredit the entire Turing test???

if the machine is passing the Turing story test, for really complicated stories, then I'd be prepared to call it intelligent - the rules would have to be so complex that they might as well be intelligent

what would Searle say if we could actually get a machine to pass his Turing test???

why must my program be restricted to just formal symbol manipulation - if I'm modelling a neural network, it's not doing this at all, surely???

what is a formal system???
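
(Not from Searle or the reading - just a minimal sketch, in Python, of what a formal system amounts to: rules defined entirely over the shapes of uninterpreted symbols, applied with no regard to meaning. The alphabet and rules here are made up for illustration.)

# Toy formal system: rewrite rules over the uninterpreted alphabet {A, B, C}.
# The machine "answers" by pattern-matching shapes, never by understanding.
RULES = {
    "AB": "C",   # wherever the string "AB" appears, it may be replaced by "C"
    "CC": "A",   # wherever "CC" appears, it may be replaced by "A"
}

def rewrite_once(s: str) -> str:
    """Apply the first matching rule once, purely syntactically."""
    for pattern, replacement in RULES.items():
        if pattern in s:
            return s.replace(pattern, replacement, 1)
    return s  # no rule applies

def rewrite(s: str, max_steps: int = 20) -> str:
    """Iterate the rules until nothing changes (or we give up)."""
    for _ in range(max_steps):
        new = rewrite_once(s)
        if new == s:
            break
        s = new
    return s

print(rewrite("ABABCC"))  # -> "AA", derived without anything meaning anything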

what's the law of the excluded middle???

he says he acknowledges that understanding isn't a simple two-place predicate, and yet he says, 'There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument' - what clear cases???

are all relational predicates intentional??? are all two-place predicates intentional??? ("Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not." (Searle))

so what does Searle mean by "understanding" then???

would Dennett say that the Chinese room (the system) was conscious in some way, just by dint of the complexity of operations it's performing???

when Searle responds to the Systems reply by making the individual memorise all the rules and symbols and do it all in his head to internalise the whole system, I think that the individual would start to learn Chinese. does this make a difference??? would it be question-begging for Searle to respond that the individual's brain is outside the system???

Searle is caricaturing the formal system as some sort of huge finite state system, when in reality, it would be a prodigiously complicated syntactic/fuzzy system of some kind - rather like a neural network???
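
(Again my own illustration, not Searle's: a toy connectionist model, to make the contrast above concrete. There are no "if you see this squiggle, write that squoggle" rules here, only made-up continuous weights and activations - though the whole computation can of course still be described as formal operations on numerals, which is presumably where Searle would press.)

import numpy as np

# Tiny feedforward network: 3 inputs -> 4 hidden units -> 2 outputs.
# The weights are arbitrary; the point is only the style of computation.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

def forward(x):
    """One forward pass: weighted sums squashed by tanh, not rule lookup."""
    hidden = np.tanh(W1 @ x)
    return np.tanh(W2 @ hidden)

print(forward(np.array([0.2, -1.0, 0.5])))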

if we stop thinking of the brain as just a parallel, distributed-representation computer, does Searle's argument make more sense???

if we take a holistic view of beliefs (and so intentionality in general), treating them as functional representations, then wouldn't our formal symbol manipulation, if complex and inter-linked enough, amount to just such a set of inter-defined functional representations???

doesn't a lot ride on the extent to which language is intermeshed with intentionality???

what does it mean to talk of beliefs having "direction of fit"???

how is the sensory/motor information encoded into Chinese???

assuming Fourier transforms etc. are just more formal systems manipulation, then at what point does the understanding occur in the brain, given that we can trace the low-level image manipulation up to the visual cortex???

what is it about the causal properties that gives rise to intentional states - surely the water pipes have causal properties??? if a water-piped Chinese room passed the Turing test, then Searle's claim "that the formal properties are not sufficient for the causal properties is shown by the water pipe example" turns out to be begging the question??? how does he define intentionality so that it excludes such a system??? when he rejects the Combination reply, what more does he want than this, and what more do we get when we ascribe consciousness and intentionality to our fellow human beings???

is Searle then against connectionist AI at all???

the reason that Searle can't be convinced by any arguments is that whenever he is asked to consider a system that really appears to be showing intentionality, he imagines that same system with a little man doing the operations and declares it clearly not intentional, without ever giving us a criterion for just what sort of system would be intentional???

can we work the other way (like Dennett does with creativity and Shakespeare), reducing human input until we have a system that is clearly intentional without a little man???

"The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state." eh???

what does "mental processes are computational processes over formally defined elements", or "where the operation of the machine is defined as an instantiation of a computer program", mean, and if we abandon this as our motto for AI, will Searle go away???

what does it mean when Searle says that an artificially produced machine with a nervous system "sufficiently like ours" could think - how do we tell, and what is the essence of this similarity??? if it's an "empirical question", then surely we can deduce principles or laws underlying which causal properties are important, and then model those???

what more do we do when we learn a second language than memorise the translation rules until they become unconscious??? perhaps, though, we do learn to reconnect all our previous experiences so that they are now expressible in Chinese???

it seems important that Searle's Chinese Room experiment is about a language-user learning a new language, rather than language itself...

would a cognitive scientist see a fire or rainstorm as doing information processing???

why must AI workers be strong dualists???

what is the difference between a program and a machine??? the formality. what is there about a machine that cannot be expressed formally???

why is intentionality necessarily a biological phenomenon??? how is intentionality related to consciousness in Searle's view???

is Searle right??? do I actually agree with him without realising, since the AI I have in mind is all about modelling the computation that the physical brain is doing - or is it??? what is the difference between the two???

for me, I see little difference, and it's all a mystery, so I'm forced to be an operationalist. for him, it's causal properties. at least he's confident about the consequences of his views...

see "Jeffery" re finite state machine???

Jeffery

does Searle employ countless preset responses in the Chinese room (i.e. a finite state machine)??? is it ambiguous in the text???
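
(To make the "countless preset responses" reading concrete - my own toy, not from Searle or Jeffery, and the entries are placeholders rather than real Chinese: a pure lookup-table responder stores a canned answer for every possible story-question pair and does nothing but retrieval. Part of the point of the question above, and of the complexity point earlier, is that such a table would have to be astronomically large, which is one reason this reading of the rule book seems uncharitable.)

# Hypothetical "preset responses" machine: no rules, no composition,
# just retrieval of stored answers keyed on the exact input.
PRESET_RESPONSES = {
    ("story-1", "question-1"): "answer-1",
    ("story-1", "question-2"): "answer-2",
    ("story-2", "question-1"): "answer-3",
}

def respond(story, question):
    return PRESET_RESPONSES.get((story, question), "no preset answer")

print(respond("story-1", "question-2"))  # -> "answer-2"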

what�s the difference between animate and inanimate???

surely our high-level processing could be seen as "perceiving" the low-level perception??? but there is no single level separating them - they're all connected to each other

Discarded

Notes - Flanagan, "Consciousness reconsidered", chapter 2

Can we really say that there is something, i.e. consciousness, for us to discuss?

phlogiston objection

karma objection